Abduction-Based Explanations for Machine Learning Models
The growing range of applications of Machine Learning (ML) in a multitude of
settings motivates the need to compute small explanations for the predictions
made. Small explanations are generally accepted as easier for human decision
makers to understand. Most earlier work on computing explanations is based on
heuristic approaches, providing no guarantees of quality in terms of how far
such solutions are from cardinality- or subset-minimal explanations. This paper
develops a constraint-agnostic solution for computing explanations for any ML
model. The proposed solution exploits abductive reasoning and imposes the
requirement that the ML model can be represented as a set of constraints in
some target constraint reasoning system for which the decision problem can be
answered by an oracle. The experimental results, obtained on well-known
datasets, validate the scalability of the proposed approach as well as the
quality of the computed solutions.
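One common way to realize this kind of oracle-based computation is a deletion-based extraction of a subset-minimal explanation. The sketch below is illustrative rather than the paper's published algorithm; `entails` is a hypothetical oracle (e.g., a single SAT/SMT/MILP decision call) supplied by the caller.

```python
# Minimal sketch of deletion-based extraction of a subset-minimal
# abductive explanation. `entails(fixed, instance, prediction)` is a
# hypothetical oracle: it must decide whether fixing the features in
# `fixed` to their values in `instance` forces the model to output
# `prediction` (e.g., by asking a solver whether a counterexample exists).

def abductive_explanation(instance, prediction, entails):
    """Return a subset-minimal set of features whose values entail the prediction."""
    explanation = set(instance)              # start with every feature fixed
    for feature in list(instance):
        candidate = explanation - {feature}  # tentatively free this feature
        if entails(candidate, instance, prediction):
            explanation = candidate          # feature not needed; drop it for good
    return explanation
```

Each oracle call either confirms that the prediction is still forced without the feature, in which case the feature is permanently dropped, or refutes it, in which case the feature stays in the explanation.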
On Tackling Explanation Redundancy in Decision Trees
Decision trees (DTs) epitomize the ideal of interpretability of machine
learning (ML) models. The interpretability of decision trees motivates
explainability approaches based on so-called intrinsic interpretability, and it is at
the core of recent proposals for applying interpretable ML models in high-risk
applications. The belief in DT interpretability is justified by the fact that
explanations for DT predictions are generally expected to be succinct. Indeed,
in the case of DTs, explanations correspond to DT paths. Since decision trees
are ideally shallow, and so paths contain far fewer features than the total
number of features, explanations in DTs are expected to be succinct, and hence
interpretable. This paper offers both theoretical and experimental arguments
demonstrating that, as long as the interpretability of decision trees is equated
with the succinctness of explanations, decision trees ought not to be deemed
interpretable. The paper introduces logically rigorous path explanations and
path explanation redundancy, and proves that there exist functions for which
decision trees must exhibit paths with arbitrarily large explanation
redundancy. The paper also proves that only a very restricted class of
functions can be represented with DTs that exhibit no explanation redundancy.
In addition, the paper includes experimental results substantiating that path
explanation redundancy is observed ubiquitously, both in decision trees obtained
using different tree learning algorithms and in a wide range of publicly
available decision trees. The paper also proposes
polynomial-time algorithms for eliminating path explanation redundancy, which
in practice require negligible time to compute. Thus, these algorithms serve to
indirectly attain irreducible, and so succinct, explanations for decision
trees.
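As an illustration of what eliminating path explanation redundancy involves, the sketch below greedily drops path literals whose removal cannot change the predicted class. The nested-dict tree encoding, the interval form of literals, and the function names are assumptions made for this example; the paper's own algorithms are not reproduced here.

```python
# Minimal sketch of pruning redundant literals from a decision-tree path
# explanation. A tree node is either a leaf {'class': c} or an internal
# node {'feature': f, 'threshold': t, 'left': ..., 'right': ...}, with the
# left child taken when f <= t. A path literal on feature f is kept as an
# interval (lo, hi) of values still allowed for f.

def reachable_classes(node, intervals):
    """Classes of all leaves reachable by some instance respecting `intervals`."""
    if 'class' in node:
        return {node['class']}
    f, t = node['feature'], node['threshold']
    lo, hi = intervals.get(f, (float('-inf'), float('inf')))
    classes = set()
    if lo <= t:                                   # some allowed value of f goes left
        classes |= reachable_classes(node['left'], intervals)
    if hi > t:                                    # some allowed value of f goes right
        classes |= reachable_classes(node['right'], intervals)
    return classes

def prune_path_explanation(tree, intervals, prediction):
    """Greedily drop path literals whose removal keeps the prediction invariant."""
    kept = dict(intervals)
    for feature in list(kept):
        trial = {f: b for f, b in kept.items() if f != feature}
        if reachable_classes(tree, trial) == {prediction}:
            kept = trial                          # literal is redundant; drop it
    return kept
```

The redundancy check only traverses the tree, which is why this kind of pruning runs in polynomial time in the size of the tree.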
DPLL+ROBDD Derivation Applied to Inversion of Some Cryptographic Functions
The paper presents logical derivation algorithms that can be applied to the inversion of polynomially computable discrete functions. The proposed approach is based on the fact that it is possible to organize DPLL derivation on a small subset of the variables appearing in a CNF that encodes the algorithm computing the function. The experimental results showed that arrays of conflict clauses generated by this mode of derivation, as a rule, have efficient ROBDD representations. This fact is the starting point for developing a hybrid DPLL+ROBDD derivation strategy: the derivation techniques for ROBDD representations of conflict databases are the same as those in common DPLL (variable assignments and unit propagation). In addition, compact ROBDD representations of the conflict databases can be shared effectively in a distributed computing environment.
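The two derivation steps named in the abstract, variable assignment and unit propagation, can be sketched over a plain clause list as follows. The ROBDD representation of the conflict databases, which is the paper's actual contribution, is not reproduced here, and the DIMACS-style clause encoding below is only an illustrative assumption.

```python
# Minimal sketch of unit propagation over a CNF given as a list of clauses,
# each clause a tuple of non-zero signed integers (DIMACS-style literals).
# `assignment` maps a variable to True/False and is extended in place.
# This shows only the basic DPLL step; the hybrid DPLL+ROBDD strategy
# stores and propagates conflict clauses in ROBDD form instead.

def unit_propagate(clauses, assignment):
    """Apply unit propagation; return the simplified clauses, or None on conflict."""
    changed = True
    while changed:
        changed = False
        remaining = []
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, val = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == val:
                        satisfied = True          # clause already satisfied
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None                       # empty clause: conflict reached
            if len(unassigned) == 1:              # unit clause forces an assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
            else:
                remaining.append(tuple(unassigned))
        clauses = remaining
    return clauses
```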
Delivering Inflated Explanations
In the quest for Explainable Artificial Intelligence (XAI), one of the
questions that frequently arises, given a decision made by an AI system, is:
"why was the decision made in this way?" Formal approaches to explainability
build a formal model of the AI system and use this to reason about the
properties of the system. Given a set of feature values for an instance to be
explained, and a resulting decision, a formal abductive explanation is a set of
features such that, if they take their given values, the decision will always be
the same. This explanation is useful: it shows that only some features were
used in making the final decision. But it is narrow: it only shows that the
decision is unchanged if the selected features take exactly their given values.
It is possible that some features may change values and still lead to the same
decision. In this paper we formally define inflated explanations, which consist
of a set of features and, for each feature, a set of values (always including
the value of the instance being explained), such that the decision will remain unchanged.
Inflated explanations are more informative than abductive explanations since,
for example, they allow us to see whether the exact value of a feature is
important or whether it could be any nearby value. Overall, they allow us to
better understand the role of each feature in the decision. We show that
inflated explanations can be computed at only modestly greater cost than
abductive explanations, and that duality results for abductive explanations
extend to inflated explanations.
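To make the definition concrete, the sketch below greedily grows each kept feature's single value into a larger set of allowed values, subject to an oracle confirming that the decision stays the same. The `invariant` oracle, the finite `domains`, and the greedy value order are assumptions made for this example, not the paper's procedure.

```python
# Minimal sketch of "inflating" an abductive explanation when feature
# domains are finite. `invariant(value_sets, prediction)` is a hypothetical
# oracle deciding whether every instance whose explanation features lie in
# the given value sets (all other features ranging over their whole domain)
# still receives `prediction`.

def inflate_explanation(explanation, instance, domains, prediction, invariant):
    """Grow each explanation feature's value into the largest set found greedily."""
    value_sets = {f: {instance[f]} for f in explanation}   # abductive starting point
    for feature in explanation:
        for value in domains[feature]:
            if value in value_sets[feature]:
                continue
            trial = dict(value_sets)
            trial[feature] = value_sets[feature] | {value}  # tentatively also allow it
            if invariant(trial, prediction):
                value_sets = trial                          # decision unchanged: keep it
    return value_sets
```

Each oracle call mirrors the check behind an abductive explanation, which is why the overall cost stays close to that of computing the abductive explanation itself.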
- …